People Are Highly Cooperative with Large Language Models, Especially When Communication Is Possible or Following Human Interaction
Niszczota, Paweł, Grzegorczyk, Tomasz, Pastukhov, Alexander
Machines driven by large language models (LLMs) have the potential to augment humans across various tasks, a development with profound implications for business settings where effective communication, collaboration, and stakeholder trust are paramount. To explore how interacting with an LLM instead of a human might shift cooperative behavior in such settings, we used the Prisoner's Dilemma game -- a surrogate for several real-world managerial and economic scenarios. In Experiment 1 (N=100), participants engaged in a thirty-round repeated game against a human, a classic bot, and an LLM (GPT, in real time). In Experiment 2 (N=192), participants played a one-shot game against a human or an LLM, with half of them allowed to communicate with their opponent, enabling LLMs to leverage a key advantage over older-generation machines. Cooperation rates with LLMs -- while approximately 10-15 percentage points lower than with human opponents -- were nonetheless high. This finding was particularly notable in Experiment 2, where the psychological cost of selfish behavior was reduced. Although allowing communication about cooperation did not close the human-machine behavioral gap, it increased the likelihood of cooperation with humans and LLMs equally (by 88%), which is particularly surprising for LLMs, given their non-human nature and the assumption that people would be less receptive to cooperating with machines than with human counterparts. Additionally, cooperation with LLMs was higher following prior interaction with humans, suggesting a spillover effect in cooperative behavior. Our findings validate the (careful) use of LLMs by businesses in settings that have a cooperative component.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
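The repeated and one-shot games in the study above follow the standard Prisoner's Dilemma structure. A minimal sketch, using the textbook T > R > P > S payoff ordering -- the numeric payoffs here are illustrative, not the values used in the experiments:

```python
# One-shot Prisoner's Dilemma payoffs; illustrative textbook values,
# not the study's ("C" = cooperate, "D" = defect).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: reward R for both
    ("C", "D"): (0, 5),  # sucker's payoff S vs. temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: punishment P for both
}

def play_round(move_a: str, move_b: str) -> tuple[int, int]:
    """Return the payoff pair for one round, given moves "C" or "D"."""
    return PAYOFFS[(move_a, move_b)]
```

Defection strictly dominates in a single round (5 > 3 and 1 > 0), which is what makes the high observed cooperation rates informative.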
AI can be more persuasive than humans in debates, scientists find
Artificial intelligence can do just as well as humans, if not better, when it comes to persuading others in a debate, and not just because it cannot shout, a study has found. Experts say the results are concerning, not least as it has potential implications for election integrity. "If persuasive AI can be deployed at scale, you can imagine armies of bots microtargeting undecided voters, subtly nudging them with tailored political narratives that feel authentic," said Francesco Salvi, the first author of the research from the Swiss Federal Institute of Technology in Lausanne. He added that such influence was hard to trace, even harder to regulate and nearly impossible to debunk in real time. "I would be surprised if malicious actors hadn't already started to use these tools to their advantage to spread misinformation and unfair propaganda," Salvi said.
- Europe > Switzerland > Vaud > Lausanne (0.25)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.05)
- Government > Voting & Elections (0.92)
- Media > News (0.56)
DeepMind's Latest AI Trounces Human Players at the Game 'Stratego'
Yet to navigate our unpredictable world, an AI needs to learn to make choices with imperfect information -- as we do every single day. DeepMind just took a stab at solving this conundrum. The trick was to interweave game theory into an algorithmic strategy loosely based on the human brain, called deep reinforcement learning. The result, DeepNash, toppled human experts in a highly strategic board game called Stratego. A notoriously difficult game for AI, Stratego requires multiple strengths of human wit: long-term thinking, bluffing, and strategizing, all without knowing your opponent's pieces on the board.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.63)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.63)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.57)
How Much AI Is Really Used in Video Games?
How close is the relationship between AI technology and video game development? From the exploratory adventure of open-world games to the comforting loop of online slots, the majority of video games use AI in some way, shape, or form; be it NPC interaction, enemy behavior, or otherwise. Contrary to its portrayal in most forms of entertainment media, AI isn't restricted to robots and supercomputers. Instead, it's a relatively ubiquitous technology, especially when it comes to gaming. As a matter of fact, you could go as far as saying that AI and video games likely wouldn't exist without each other.
Architecting and Visualizing Deep Reinforcement Learning Models
Neuwirth, Alexander, Riley, Derek
To meet the growing interest in Deep Reinforcement Learning (DRL), we sought to construct a DRL-driven Atari Pong agent and accompanying visualization tool. Existing approaches do not support the flexibility required to create an interactive exhibit with easily-configurable physics and a human-controlled player. Therefore, we constructed a new Pong game environment, discovered and addressed a number of unique data deficiencies that arise when applying DRL to a new environment, architected and tuned a policy gradient based DRL model, developed a real-time network visualization, and combined these elements into an interactive display to help build intuition and awareness of the mechanics of DRL inference.
- North America > United States > Wisconsin > Milwaukee County > Milwaukee (0.05)
- North America > Canada > Ontario > Toronto (0.04)
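The policy-gradient approach the abstract above mentions can be illustrated on a much smaller problem than Pong. A minimal REINFORCE sketch on a two-armed bandit with a softmax policy -- the function names and setup are hypothetical and far simpler than the paper's agent:

```python
import math
import random

def softmax(logits):
    """Convert a list of logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def reinforce_bandit(rewards=(0.0, 1.0), steps=500, lr=0.1, seed=0):
    """Train a softmax policy on a 2-armed bandit with REINFORCE."""
    rng = random.Random(seed)
    logits = [0.0, 0.0]
    for _ in range(steps):
        p = softmax(logits)
        action = 0 if rng.random() < p[0] else 1
        reward = rewards[action]
        # Policy-gradient update: grad of log pi(a) is one-hot(a) - p.
        for i in range(2):
            logits[i] += lr * reward * ((1.0 if i == action else 0.0) - p[i])
    return softmax(logits)
```

Running `reinforce_bandit()` shifts most of the probability mass onto the rewarded arm; the Pong agent applies the same gradient estimator to a neural-network policy over game frames.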
Am I arguing with a machine? AI debaters highlight need for transparency
Can a machine powered by artificial intelligence (AI) successfully persuade an audience in debate with a human? Researchers at IBM Research in Haifa, Israel, think so. They describe the results of an experiment in which a machine engaged in live debate with a person. Audiences rated the quality of the speeches they heard, and ranked the automated debater's performance as being very close to that of humans. Such an achievement is a striking demonstration of how far AI has come in mimicking human-level language use.
- Asia > Middle East > Israel > Haifa District > Haifa (0.25)
- North America > United States > California > San Francisco County > San Francisco (0.15)
- North America > United States > California > Alameda County > Berkeley (0.05)
Tencent details how its MOBA-playing AI system beats 99.81% of human opponents
In August, Tencent announced it had developed an AI system capable of defeating teams of pros in a five-on-five match in Honor of Kings (or Arena of Valor, depending on the region). This was a noteworthy achievement -- Honor of Kings occupies the video game subgenre known as multiplayer online battle arena games (MOBAs), which are incomplete information games in the sense that players are unaware of the actions other players choose. The endgame, then, isn't merely AI that achieves superhuman Honor of Kings performance, but insights that might be used to develop systems capable of solving some of society's toughest challenges. A paper published this week peels back the layers of Tencent's technique, which the coauthors describe as "highly scalable." They claim its novel strategies enable it to explore the game map "efficiently," with an actor-critic architecture that self-improves over time.
DeepMind's StarCraft 2 AI is now better than 99.8 percent of all human players
DeepMind today announced a new milestone for its artificial intelligence agents trained to play the Blizzard Entertainment game StarCraft II. The Google-owned AI lab's more sophisticated software, still called AlphaStar, is now grandmaster level in the real-time strategy game, capable of besting 99.8 percent of all human players in competition. The findings are to be published in a research paper in the scientific journal Nature. Not only that, but DeepMind says it also evened the playing field when testing the new and improved AlphaStar against human opponents who opted into online competitions this past summer. For one, it trained AlphaStar to use all three of the game's playable races, adding to the complexity of the game at the upper echelons of pro play.
An A.I. has beaten humans at yet another of our own games
Many real-world applications require artificial agents to compete and coordinate with other agents in complex environments. As a stepping stone to this goal, the domain of StarCraft has emerged by consensus as an important challenge for artificial intelligence research, owing to its iconic and enduring status among the most difficult professional esports and its relevance to the real world in terms of its raw complexity and multiagent challenges. Over the course of a decade and numerous competitions [1-3], the best results have been made possible by hand-crafting major elements of the system, simplifying important aspects of the game, or using superhuman capabilities [4]. Even with these modifications, no previous system has come close to rivalling the skill of top players in the full game. We chose to address the challenge of StarCraft using general purpose learning methods that are in principle applicable to other complex domains: a multi-agent reinforcement learning algorithm that uses data from both human and agent games within a diverse league of continually adapting strategies and counterstrategies, each represented by deep neural networks [5,6]. We evaluated our agent, AlphaStar, in the full game of StarCraft II, through a series of online games against human players. AlphaStar was rated at Grandmaster level for all three StarCraft races and above 99.8% of officially ranked human players.
Fermat's Library: annotated/explained version of "Some Studies in Machine Learning Using the Game of Checkers"
This is Arthur Samuel's seminal paper, originally published in 1959, in which he sets out to build a program that can learn to play the game of checkers. Checkers is an extremely complex game -- it has roughly 500 billion billion possible positions -- so a brute-force-only approach to solving it is not satisfactory. Samuel's program was based on Claude Shannon's minimax strategy to find the best move from a given current position. In this paper he describes how a machine could look ahead "by evaluating the resulting board positions much as a human player might do".
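Samuel's look-ahead idea -- score positions a few plies deep and back the values up the game tree -- is the minimax procedure. A minimal depth-limited sketch; the game interface (`moves`, `apply_move`, `evaluate`) is a hypothetical abstraction, not Samuel's checkers code:

```python
# Depth-limited minimax in the spirit of Samuel's look-ahead search.
# `moves(state)` lists legal moves, `apply_move(state, m)` returns the
# successor state, and `evaluate(state)` scores a leaf position.
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Return the best achievable evaluation from `state`, searching
    `depth` plies ahead and scoring the frontier with `evaluate`."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False,
                           moves, apply_move, evaluate) for m in legal)
    return min(minimax(apply_move(state, m), depth - 1, True,
                       moves, apply_move, evaluate) for m in legal)
```

In Samuel's program, the role of `evaluate` was played by a weighted scoring function over board features whose weights were adjusted through self-play -- the learning component that made the paper seminal.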